Predict Responsibly: Increasing Fairness by Learning To Defer
Authors
Abstract
Machine learning systems, which are often used for high-stakes decisions, suffer from two mutually reinforcing problems: unfairness and opaqueness. Many popular models, although generally accurate, cannot express uncertainty about their predictions. Even in regimes where a model is inaccurate, users may trust its predictions too fully and allow its biases to reinforce their own. In this work, we explore models that learn to defer. In our scheme, a model learns to classify accurately and fairly, but also to defer if necessary, passing judgment to a downstream decision-maker such as a human user. We further propose a learning algorithm that accounts for potential biases held by decision-makers later in a pipeline. Experiments on real-world datasets demonstrate that learning to defer can make a model not only more accurate but also less biased. We show that even when operated by highly biased users, deferring models can still greatly improve the fairness of the entire pipeline.
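As a rough illustration of the idea (a minimal sketch, not the authors' formulation), the code below trains a small network with two heads: one predicts the label and the other predicts a probability of deferring to a simulated downstream decision-maker. The synthetic data, the decision-maker simulator dm_predict, and the value of defer_penalty are assumptions made purely for this example.

# Minimal learning-to-defer sketch (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: two features, binary label.
n = 2000
X = torch.randn(n, 2)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).float()

# Simulated downstream decision-maker (e.g. a human user): reliable when the
# first feature is non-negative, noisy otherwise. A stand-in, not real data.
def dm_predict(X):
    true_label = (X[:, 0] + 0.5 * X[:, 1] > 0).float()
    noisy = torch.rand(len(X)) < 0.35
    return torch.where((X[:, 0] < 0) & noisy, 1.0 - true_label, true_label)

y_dm = dm_predict(X)

# Model with two heads: a class-probability head and a defer-probability head.
class DeferNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(2, 16), nn.ReLU())
        self.classify = nn.Linear(16, 1)
        self.defer = nn.Linear(16, 1)

    def forward(self, x):
        h = self.body(x)
        p = torch.sigmoid(self.classify(h)).squeeze(-1)
        s = torch.sigmoid(self.defer(h)).squeeze(-1)
        return p, s

model = DeferNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
bce = nn.BCELoss(reduction="none")
defer_penalty = 0.05  # assumed cost of passing a decision downstream

for step in range(500):
    p_hat, s = model(X)
    loss_model = bce(p_hat, y)        # per-example loss if the model decides
    loss_dm = (y_dm != y).float()     # 0/1 loss if the decision-maker decides
    # Expected pipeline loss: defer with probability s, otherwise predict.
    loss = ((1 - s) * loss_model + s * (loss_dm + defer_penalty)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    p_hat, s = model(X)
    keep = s < 0.5                    # model keeps cases it is confident about
    final = torch.where(keep, (p_hat > 0.5).float(), y_dm)
    print(f"deferral rate: {(~keep).float().mean():.2f}, "
          f"pipeline accuracy: {(final == y).float().mean():.2f}")

Because the deferral head is trained against the decision-maker's actual errors, the model learns to keep the inputs it handles well and to hand off the rest, which is the pipeline behaviour the abstract describes.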
Similar resources
ECS: An enhanced carrier sensing mechanism for wireless ad hoc networks
In wireless ad-hoc networks, whenever a node overhears a frame, the node should defer its transmission to prevent interference with the ongoing transmission. The exact duration for which the node should defer is contained in the frame. However, due to wireless transmission errors and due to the fact that the carrier sensing range is normally greater than the transmission range, a f...
Using Machine Learning ARIMA to Predict the Price of Cryptocurrencies
The increasing volatility in pricing and growing potential for profit in digital currency have made predicting the price of cryptocurrency a very attractive research topic. Several studies have already been conducted using various machine-learning models to predict cryptocurrency prices. The study presented in this paper applied a classic Autoregressive Integrated Moving Average (ARIMA) model ...
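As background on the baseline mentioned above, a classic ARIMA forecast might be set up as in the sketch below; the synthetic price series and the order (2, 1, 2) passed to the statsmodels ARIMA class are illustrative assumptions, not details taken from the study.

# Minimal ARIMA forecasting sketch (illustrative only).
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Synthetic "price" series: a random walk with drift stands in for real data.
prices = pd.Series(100 + np.cumsum(rng.normal(0.1, 1.0, size=365)))

train, test = prices[:-30], prices[-30:]
fit = ARIMA(train, order=(2, 1, 2)).fit()   # order (p, d, q) is an assumption
forecast = fit.forecast(steps=30)

mae = np.abs(forecast.values - test.values).mean()
print(f"30-day mean absolute error: {mae:.2f}")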
Fairness in Relational Domains
AI and machine learning tools are being used with increasing frequency for decision making in domains that affect people's lives, such as employment, education, policing, and loan approval. These uses raise concerns about bias and algorithmic discrimination, and have motivated the development of fairness-aware machine learning. However, existing fairness approaches are based solely on attributes ...
Are groups more likely to defer choice than their members?
When faced with a choice, people can normally select no option, i.e., defer choice. Previous research has investigated when and why individuals defer choice, but has almost never looked at these questions when groups of people make choices. Separate reasons predict that groups may be equally likely, more likely, or less likely than individuals to defer choice. We re-analyzed some previously pub...
Fair Processes for Priority Setting: Putting Theory into Practice; Comment on "Expanded HTA: Enhancing Fairness and Legitimacy"
Embedding health technology assessment (HTA) in a fair process has great potential to capture societal values relevant to public reimbursement decisions on health technologies. However, the development of such processes for priority setting has largely been theoretical. In this paper, we provide further practical guidance on how these processes can be implemented. We first present the misconce...
Journal: CoRR
Volume: abs/1711.06664
Pages: -
Publication date: 2017